High-integrity optical pose estimation using coded features
Patent abstract:
A head tracking system uses coded features, highly structured compared to the operating environment, to ensure a high-integrity correspondence map. Coded features can be used to provide a negligible probability of spurious features and probability of misidentification. Ensuring a reliable correspondence map prevents unmodeled errors that arise from an invalid correspondence map. In the case of multiple outliers, existing screening techniques, such as random sample consensus (RANSAC) or fault exclusion, may be used to eliminate excessive outliers. Mature GPS integrity techniques may then be extended to optical pose estimation to establish integrity bounds for single faults that may go undetected by fault detection.

Publication number: EP3690734A1
Application number: EP20154927.6
Filing date: 2020-01-31
Publication date: 2020-08-05
Inventors: Christopher M. Boggs; William T. Kirchner; Ankur ANKUR
Applicant: Rockwell Collins Inc
IPC main class: G06T7-00
Patent description:
[0001] Optical pose estimation is used increasingly in avionics applications such as optical head trackers for head-worn displays (HWD) that show flight guidance cues to pilots or image-based relative navigation for platform-based terminal guidance and formation flying. Computer vision-based solutions are attractive because they offer low SWAP-C and availability when other sensor modalities are not available (e.g., GPS, Radio, magnetometer). However, for an optical pose estimate to be used in a safety-critical application, the pose estimate must also include integrity bounds that overbound the pose estimation error with high confidence to prevent hazardous, misleading information.

[0002] Model-based pose estimation is typically performed by adjusting the pose estimate such that it minimizes reprojection error between measured 2D points (detected in the image) and projected 2D feature points (determined by projecting a constellation of 3D points into the image plane). The correspondence mapping between measured 2D points and constellation 3D points is typically not known, and a high number of spurious correspondence pairings are common. The large number of combinations of potential correspondence maps makes error modeling challenging, if not intractable. However, given that the correspondence map is correct, error modeling simplifies drastically.

SUMMARY

[0003] In one aspect, embodiments of the inventive concepts disclosed herein are directed to coded features, highly structured compared to the operating environment, to ensure a high-integrity correspondence map. Coded features can be used to provide a negligible probability of spurious features and probability of misidentification. Ensuring a reliable correspondence map prevents unmodeled errors that arise from an invalid correspondence map. In the case of multiple outliers, existing screening techniques, such as random sample consensus (RANSAC) or fault exclusion, may be used to eliminate excessive outliers.
Mature GPS integrity techniques may then be extended to optical pose estimation to establish integrity bounds for single faults that may go undetected by fault detection.

[0004] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and should not restrict the scope of the claims. The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments of the inventive concepts disclosed herein and together with the general description, serve to explain the principles.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The numerous advantages of the embodiments of the inventive concepts disclosed herein may be better understood by those skilled in the art by reference to the accompanying figures in which:

FIG. 1 shows a block diagram of a system for implementing exemplary embodiments of the inventive concepts disclosed herein;
FIG. 2 shows an environmental view of a helmet including fiducials with dimensional features according to an exemplary embodiment of the inventive concepts disclosed herein;
FIG. 3 shows fiducials with dimensional features according to exemplary embodiments of the inventive concepts disclosed herein;
FIG. 4A shows an environmental view of an aircraft cockpit including an exemplary embodiment of the inventive concepts disclosed herein;
FIG. 4B shows an environmental view of an aircraft cockpit including an exemplary embodiment of the inventive concepts disclosed herein;
FIG. 4C shows an environmental view of an aircraft cockpit including an exemplary embodiment of the inventive concepts disclosed herein;
FIG. 5 shows a block diagram representation of various frames of reference used to calculate a head pose according to exemplary embodiments of the inventive concepts disclosed herein; and FIG.
6 shows a flowchart of a method for determining a head pose according to an exemplary embodiment of the inventive concepts disclosed herein.

DETAILED DESCRIPTION

[0006] Before explaining at least one embodiment of the inventive concepts disclosed herein in detail, it is to be understood that the inventive concepts are not limited in their application to the details of construction and the arrangement of the components or steps or methodologies set forth in the following description or illustrated in the drawings. In the following detailed description of embodiments of the instant inventive concepts, numerous specific details are set forth in order to provide a more thorough understanding of the inventive concepts. However, it will be apparent to one of ordinary skill in the art having the benefit of the instant disclosure that the inventive concepts disclosed herein may be practiced without these specific details. In other instances, well-known features may not be described in detail to avoid unnecessarily complicating the instant disclosure. The inventive concepts disclosed herein are capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. [0007] As used herein a letter following a reference numeral is intended to reference an embodiment of the feature or element that may be similar, but not necessarily identical, to a previously described element or feature bearing the same reference numeral (e.g., 1, 1a, 1b). Such shorthand notations are used for purposes of convenience only, and should not be construed to limit the inventive concepts disclosed herein in any way unless expressly stated to the contrary. [0008] Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or.
For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). [0009] In addition, the articles "a" or "an" are employed to describe elements and components of embodiments of the instant inventive concepts. This is done merely for convenience and to give a general sense of the inventive concepts, and "a" and "an" are intended to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise. [0010] Finally, as used herein any reference to "one embodiment," or "some embodiments" means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the inventive concepts disclosed herein. The appearances of the phrase "in some embodiments" in various places in the specification are not necessarily all referring to the same embodiment, and embodiments of the inventive concepts disclosed may include one or more of the features expressly described or inherently present herein, or any combination or sub-combination of two or more such features, along with any other features which may not necessarily be expressly described or inherently present in the instant disclosure. [0011] Broadly, embodiments of the inventive concepts disclosed herein are directed to a pose tracking and verification system for determining poses from fiducials with coded dimensional features. While specific embodiments described herein are directed toward head tracking systems, the principles described are generally applicable to any system with one or more cameras rigidly affixed to one body and one or more fiducials with coded features rigidly affixed to a second body. [0012] Referring to FIG.
1, a block diagram of a system for implementing exemplary embodiments of the inventive concepts disclosed herein is shown. The system includes a processor 100, a memory 102 connected to the processor 100 for embodying processor executable code, a data storage element 104 storing data specific to one or more fiducials, and one or more cameras 106, 108, 110. The processor 100 is configured to receive images of the fiducials from the one or more cameras 106, 108, 110, retrieve data pertaining to each fiducial from the data storage element 104, and use the data to make determinations about the error probability of a resulting head pose calculation as more fully described herein. [0013] It will be appreciated that, while some embodiments described herein specifically refer to environmental cameras and fiducials affixed to a helmet (and vice-versa), all of the principles and methodologies disclosed are equally applicable to either type of embodiment. [0014] Referring to FIG. 2, an environmental view of a helmet 200 including fiducials 202, 204, 206 with dimensional features according to an exemplary embodiment of the inventive concepts disclosed herein is shown. In at least one embodiment, where the helmet 200 is intended for use in a head tracking system, the fiducials 202, 204, 206 are each disposed at defined locations on the helmet, and with defined orientations. Because the fiducials 202, 204, 206 include coded features, the identity of each fiducial 202, 204, 206, and its specific location and orientation, are critical factors. [0015] Referring to FIG. 3, fiducials 300, 302 with dimensional features according to exemplary embodiments of the inventive concepts disclosed herein are shown. In at least one embodiment, an ArUco based fiducial 300 may be used; in another embodiment, a quick response (QR) code fiducial 302 may be used. Such fiducials 300, 302 have dimensional features that allow them to function as more than simple point sources.
Define the ith fiducial ID m_i as an N-bit integer: m_i = (m_i^1, m_i^2, ..., m_i^N) ∈ B^N. [0016] A "correspondence integrity event" (CIE) occurs when the algorithm estimates a fiducial ID m_est that is not equal to the true fiducial ID m_true without flagging the fiducial ID as erroneous. CIE will be avoided when the algorithm: correctly identifies the fiducial, fails to detect/decode a fiducial, or identifies the fiducial ID as invalid. [0017] The correspondence integrity problem may then be stated as estimating an overbound of the probability of CIE: P_CIE ≥ Pr(m_est ≠ m_true). [0018] Since the algorithm can verify that the fiducial ID estimate is in the list of known fiducial IDs in the constellation (determined during mapping), CIE will only occur when m_est is in the constellation and m_est ≠ m_true. [0019] Conceptually, any fiducial ID estimate m_est ∈ B^N that is "close" (or similar) to m_true is much more likely than one that is "far" from m_true. Since these close fiducials are much more likely to be the estimated fiducial ID than uniform random sampling of B^N would suggest, there must also be "far" fiducials whose probability is lower than uniformly random. Assuming a suitable distance metric ρ(m_i, m_j), for some distance R the probability will drop below the uniformly random probability P_rand: Pr(m_est | ρ(m_true, m_est) > R) < P_rand. [0020] In one exemplary embodiment, where a system comprises eleven fiducials and P_CIE = 10^-7: N = ceil(log2(10 / 10^-7)) = 27 bits. [0021] This CIE requirement may also be satisfied using a Micro QR code, which supports ten numerals (33 bits). Micro QR codes allow for 1.6 times the number of pixels per module for the same fiducial width. The CIE requirement may also be met using a 6x6 ArUco fiducial 300. The lower fiducial density tends to provide better availability. [0022] This calculation is conservative in that it does not take into account how many bit flips actually need to occur to cause CIE; most fiducial IDs will be much less likely than pure random to be the estimate.
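The bit-count requirement in paragraph [0020] can be sketched as follows. This is a minimal illustration, assuming a uniformly random N-bit misread; the helper name is ours:

```python
import math

# Minimal sketch of the bit-count requirement in paragraph [0020]:
# with K other fiducial IDs in the constellation and a uniformly random
# N-bit misread, the collision probability is K / 2**N.
def required_bits(num_other_ids: int, p_cie: float) -> int:
    """Smallest N with num_other_ids / 2**N <= p_cie."""
    return math.ceil(math.log2(num_other_ids / p_cie))

# Eleven fiducials leave ten other IDs a misread could collide with.
print(required_bits(10, 1e-7))  # 27 bits, matching paragraph [0020]
```

With 27 bits, the collision probability is 10 / 2^27 ≈ 7.5 × 10^-8, just under the 10^-7 budget, while 26 bits would exceed it.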
[0023] After selecting a fiducial type with enough bits, the next requirement is selecting a good set of fiducial IDs. First define a distance function ρ and a radius R outside which fiducial IDs are less likely than random. In practice, we only need to define a methodology to select well-spaced fiducials. One distance metric commonly used for sets of binary sequences is Hamming distance, the minimum number of bit flips required to convert one set member to another set member. The issue with Hamming distance for spatially coded fiducials is that it neglects the spatial correlation of bit flips. In one embodiment, fiducial sets are defined via a method to characterize how much of a difference between two fiducials can be generated using common-mode bit-flip failures versus requiring random bit flip failures. Possible causes of common-mode bit flips include, but are not limited to: illumination/shadow; lens/sensor localized effects; fiducial perspective/rotation/reflections; and systematic errors in detect/decode algorithms. Suitable criteria for selecting fiducial IDs with large inter-fiducial distances include: Hamming distance between fiducials, invariant to rotation or reflection; fiducial self-distance under rotation and reflection; number of bit transitions in a row (high spatial frequency reduces the likelihood of spatially correlated bit flips causing an integrity event); and number of times a row appears in the constellation. [0024] While exemplary fiducials described here include ArUco based fiducials 300 and QR code fiducials 302, any fiducials may be used provided they include features with sufficient data density to allow each fiducial to be uniquely identified and allow the orientation of the fiducial to be determined, and sufficient environmental distinction to minimize environmental interference. [0025] Referring to FIGS. 
4A, 4B, and 4C, environmental views of an aircraft cockpit 400 including exemplary embodiments of the inventive concepts disclosed herein are shown. The aircraft cockpit 400 includes one or more cameras 402, 404, 406, 408 and a plurality of fiducials 410, 412, 414, each having coded features to distinctly identify each fiducial 410, 412, 414 and define its orientation. In at least one embodiment (such as in FIG. 4A) a single camera 404 may image all of the fiducials 410, 412, 414 within a field of view. In at least one alternative embodiment (such as in FIG. 4B) multiple cameras 402, 404, 406, 408 may be disposed to capture images from distinct locations and produce multiple head pose estimates as more fully defined herein. [0026] While FIGS. 4A and 4B show the plurality of fiducials 410, 412, 414 disposed on a helmet and the one or more cameras 402, 404, 406, 408 disposed on surfaces of the cockpit 400, alternative embodiments (such as in FIG. 4C) may include cameras disposed on a helmet and the fiducials disposed within the cockpit. Many of the mathematical operations described here are based on helmet-mounted cameras 402 and cockpit-disposed fiducials 410, 412, 414. A person skilled in the art will appreciate that certain modifications to the mathematical operations are necessary to account for the change in reference frame. [0027] Referring to FIG. 5, a block diagram representation of various frames of reference used to calculate a head pose according to exemplary embodiments of the inventive concepts disclosed herein is shown. In head tracking in a mobile environment, a head tracking system determines the position and orientation of a "head" frame 500 (generally referenced by h herein) with respect to a "platform" frame 502 (generally referenced by p herein). The platform frame 502 may be a vehicle interior such as a cockpit. [0028] In at least one embodiment, a camera is fixed to the head frame 500 and defines a "camera" frame 504.
The camera measures fiducials, each within a "fiducial" frame 506, 508, via pixel coordinates within the captured images. An expected pixel location may be computed based on the known fiducial location, the intrinsic properties of the camera, the extrinsic properties of the rigid head-camera relative position and orientation 534, and the pose estimate of the head relative to the platform. The observed residual between the measured fiducial location and the expected fiducial location may be used to estimate the head pose. Pose estimation may be performed using a snapshot approach, loosely coupled optical-inertial, or tightly coupled optical-inertial. [0029] In at least one embodiment, the platform frame 502 is a reference frame that describes the location of objects in a vehicle such as an aircraft, and is rigidly mounted to the aircraft. In general, it may be a fixed translation (φ) and rotation (r) from the aircraft body. The platform frame 502 may be defined consistent with the SAE body frame, with a forward/backward dimension (X) 530, a right/left dimension (Y) 528, and a down/up dimension (Z) 532. For convenience, the origin of the platform frame 502 may be selected at a "nominal" pilot head location such that a nominal head pose is intuitively all zeros. [0030] The head frame 500 is a reference frame that is rigidly attached to (or rigidly conforms to) a head-worn device being tracked. The pose estimation problem determines the pose of the head with respect to the platform. [0031] The camera frame 504 may be rigidly attached to the head-worn device, with fixed translation and rotation based on the head frame 500 via the rigid head-camera relative position and orientation 534. The camera frame 504 may be defined with the origin at the camera focal point and the X dimension 518 along a camera optical axis, with the Y dimension 516 and Z dimension 520 translated accordingly.
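The expected-pixel computation described in paragraph [0028] can be sketched as a pinhole projection. This is a minimal illustration: the intrinsic values, pose, and fiducial point below are hypothetical, and for brevity the sketch uses a z-forward camera axis rather than the x-forward convention described above:

```python
import numpy as np

# Sketch of computing an expected pixel location for a known 3D fiducial
# point, given a camera pose (rotation R, position t in the platform frame)
# and a pinhole intrinsic matrix K. All numbers are hypothetical.
def expected_pixel(K, R, t, X_plat):
    """Project platform-frame point X_plat into pixel coordinates (u, v)."""
    X_cam = R @ (X_plat - t)        # platform frame -> camera frame
    u, v, w = K @ X_cam             # homogeneous pixel coordinates
    return np.array([u / w, v / w])

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                       # camera aligned with platform (assumption)
t = np.zeros(3)                     # camera at platform origin (assumption)
X = np.array([0.5, -0.2, 2.0])      # fiducial point 2 m ahead of the camera
print(expected_pixel(K, R, t, X))   # [520. 160.]
```

The residual between this expected pixel and the measured pixel is the quantity the pose estimator drives toward zero.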
The camera may be rigidly mounted to the aircraft while the head-worn device is rigidly mounted to the head. [0032] Each of a plurality of fiducials 506, 508 comprises an artifice with dimensional features, such as a QR code, which are planar and not single points. Such fiducials 506, 508 may be located at fixed positions and orientations 540, 542 within the platform frame 502. [0033] In at least one embodiment, the relative positions and orientations 540, 542 of the fiducials 506, 508 with respect to the platform frame 502, and the rigid head-camera relative position and orientation 534 of the camera frame 504 with respect to the head frame 500, are fixed such that, for the purposes of the estimation calculations set forth herein, the translation (φ) and rotation (r) associated with those relationships are invariant. Furthermore, the position and orientation 536 of the camera frame 504 with respect to each fiducial 506, 508 and the position and orientation 538 of the head frame 500 with respect to the platform frame 502 may vary over time and may be mathematically related to the fixed positions and orientations 534, 540, 542, and the X dimensions 512, 518, 524, 530, Y dimensions 510, 516, 522, 528, and Z dimensions 514, 520, 526, 532 of the respective frames 500, 502, 504 and fiducials 506, 508. It will be appreciated that all of the principles and methodologies described herein are applicable to embodiments where others of the position and orientation relationships 534, 536, 538, 540, 542 are fixed.
[0034] In at least one embodiment, a linear camera calibration matrix is used:

K = [α_x s x_0; 0 α_y y_0; 0 0 1]

[0035] A Jacobian matrix of y with respect to the intrinsic errors x_I = [α_x; α_y; s; x_0; y_0] may be determined by:

y = [α_x x_cv + s y_cv + x_0 z_cv; α_y y_cv + y_0 z_cv; z_cv] = [x_cv 0 y_cv z_cv 0; 0 y_cv 0 0 z_cv; 0 0 0 0 0] x_I + [0; 0; z_cv] = J_I x_I + [0; 0; z_cv]

[0036] Given an initial pose estimate, a processor may compute a more accurate pose estimate using nonlinear optimization to minimize reprojection errors. The search parameters for the optimization may be four quaternion states and three position states. If quaternions are used, an additional constraint is required on the quaternion norm. Alternatively, three Euler angles may be used to define a rotation from the coarse pose estimate. [0037] Here K̂ is an estimated camera calibration matrix and r_m/p^p is the fiducial position relative to the platform, resolved in the platform frame. [0038] Expanding x = P X̂_p according to these quantities: p̃ = p + δp̃. [0039] In at least one embodiment, the covariance Σ_ηz of a random vector η_z with finite second moments is defined by:

Σ_ηz = E[(η_z − E[η_z])(η_z − E[η_z])^T] = E[η_z η_z^T] − E[η_z] E[η_z]^T

[0040] The linearized measurement equation for the ith pixel measurement is:

z_i = H_H^i x_H + H_E^i x_E + H_I^i x_I + H_M^i x_M^i + e_P^i

[0041] Assuming upstream checks have ensured N ≥ N_min ≥ 3, H ∈ ℝ^(2N×6). [0042] For convenience, the over-bars may be dropped in sections using normalized coordinates. When mixing normalized and unnormalized terms, the overbars should be used for clarity and, unless otherwise stated, the unnormalized measurement equation should be assumed. [0043] When determining measurement covariance: var(e_i) = R_i. [0044] The weighted least squares estimate with weighting matrix W minimizes:

J = (z − H x̂)^T W (z − H x̂)

[0045] In general, snapshot pose estimation will yield a unique solution when P_inv is full-rank. This is ensured by the checks performed during the Cholesky inverse.
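The weighted least-squares step of paragraph [0044] and the Cholesky full-rank check mentioned above can be sketched as follows; the measurement matrix, weighting, and synthetic data are hypothetical:

```python
import numpy as np

# Sketch of the weighted least-squares estimate minimizing
# J = (z - H x)^T W (z - H x): solve (H^T W H) x = H^T W z with a Cholesky
# factorization, which doubles as the full-rank check on Pinv.
def wls(H, W, z):
    Pinv = H.T @ W @ H                  # information matrix Pinv
    L = np.linalg.cholesky(Pinv)        # raises LinAlgError if rank-deficient
    y = np.linalg.solve(L, H.T @ W @ z)
    return np.linalg.solve(L.T, y)

rng = np.random.default_rng(0)
H = rng.normal(size=(12, 6))            # 2N x 6 measurement matrix (N = 6)
W = np.eye(12)                          # unit weighting for the sketch
x_true = np.arange(6.0)
x_hat = wls(H, W, H @ x_true)           # noise-free measurements
print(np.allclose(x_hat, x_true))       # True
```

If the constellation geometry makes P_inv rank-deficient, the Cholesky factorization fails, which is exactly the uniqueness check the text describes.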
The snapshot pose will be practically unobservable in the presence of noise when P_inv is ill-conditioned. This low observability, and other insights, are illustrated by considering the singular value decomposition of the measurement matrix. Taking the singular value decomposition of the measurement equation yields:

H = U S V^T = U_1 S_1 V_1^T

[0046] After performing weighted least squares, measurement residuals may be formed as: r = z − H x̂. [0047] Fault detection tests that the measurements are consistent with the fault-free H_0 hypothesis. Given that the measurement model is correct, the test statistic will follow a Chi-square distribution. If the measurement model is optimistic, the test statistic will tend to be larger than predicted. To detect faults, set a threshold such that when a test statistic is larger than expected, the test is flagged as failed. Because the Chi-square distribution has unbounded support, even if the measurement models are fault-free, there is always some probability that the test statistic will randomly be larger than the threshold. This is called the false-positive rate (FPR). Design tradeoffs for FPR require a threshold T_max that satisfies:

Pr(T > T_max) = FPR

[0048] The exact formulation of the protection levels (PLs) should be driven by requirements and/or best practices. Under the H_0 hypothesis, the weighted least squares state estimate x̂ is distributed as:

δx = x̂ − x ∼ N(0, P) under H_0

[0049] The single-axis method does not account for correlation in the axes, which reduces the overall failure rate. Also, single-axis error bounds are most useful when certain axes are important. To account for correlation using a multivariate approach, Chi-square approaches may be applied. Given a subset of df_s axes, x_s, a Chi-square bound may be formulated using the covariance of that subset, P_s, as:

δx_s^T P_s^-1 δx_s ∼ χ²_(df_s) under H_0

[0050] Another approach to error bounding is to assume the head pose is an intermediate output for the head-worn display (HWD).
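Returning to the fault-detection threshold of paragraph [0047]: because the residual degrees of freedom 2N − 6 are even, the Chi-square survival function has a closed form and T_max can be found by bisection. A minimal sketch (the function names and example FPR are ours):

```python
import math

# For even degrees of freedom df = 2k, the chi-square survival function is
# Pr(T > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)^i / i!.
def chi2_sf_even(t: float, df: int) -> float:
    assert df > 0 and df % 2 == 0
    half, term, total = t / 2.0, 1.0, 1.0
    for i in range(1, df // 2):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Bisect for the threshold T_max with Pr(T > T_max) = FPR.
def detection_threshold(df: int, fpr: float) -> float:
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if chi2_sf_even(mid, df) > fpr else (lo, mid)
    return 0.5 * (lo + hi)

# Example: N = 6 pixel measurements -> df = 2*6 - 6 = 6, FPR = 1e-3.
T_max = detection_threshold(6, 1e-3)
print(round(T_max, 2))  # ~22.46, the chi-square 99.9th percentile for df = 6
```

A test statistic above T_max flags the measurement set as failed; lowering the FPR pushes T_max upward, which is the tradeoff discussed later in paragraph [0063].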
The ultimate output of the HWD is georeferenced or body-referenced flight guidance cues that are rendered onto pixels of the display. Errors in head pose will cause the flight guidance points to be rendered along an incorrect line of sight, which, depending on the nature of the cue and phase of flight, may be hazardously misleading. Given a linear mapping between pose errors and the change in pixel location of a flight guidance cue in the display:

δp_fg = H_fg δx

[0051] Such analysis provides error bounds assuming a fault-free H_0 condition. In situations with fault injection, adopting methodologies analogous to receiver autonomous integrity monitoring (RAIM) techniques used in GPS theory may be useful. The unfaulted measurement model H_0 may be expanded to include an additive fault b as:

H_f: z = H x + e + b, var(e) = I

[0052] The residuals are inflated due to the bias projected onto the error space. Components of bias in the error space will not impact the state estimate but will increase the test statistic, leading to detection as the fault magnitude increases. Components of bias in the modeled space of H are fully absorbed into state estimate biases, making them undetectable even with large magnitudes. Most faults will have components in both the error and modeled space, causing them to be detected as magnitudes are increased, but also causing estimation biases at smaller magnitudes. [0053] Given a desired missed detection rate (MDR), the test threshold, and the number of measurements, a critical value of the non-centrality parameter λ_det may be computed by iteratively solving for λ_det (e.g., with a MATLAB function), giving a constraint on the possible fault biases:

‖P_E b‖² ≤ λ_det ≡ p_det²

[0054] Under certain circumstances, fault vectors may be constrained.
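The modeled-space versus error-space behavior of paragraph [0052] can be illustrated numerically. This is a sketch with a random full-rank H and unit weighting, not the document's specific measurement model:

```python
import numpy as np

# A fault component in the column space of H is absorbed into the state
# estimate (zero residual, undetectable); a component in the error space
# passes into the residuals in full (detectable as magnitude grows).
rng = np.random.default_rng(1)
H = rng.normal(size=(12, 6))                 # 2N x 6, full rank w.p. 1
P_H = H @ np.linalg.inv(H.T @ H) @ H.T       # projector onto modeled space
P_E = np.eye(12) - P_H                       # projector onto error space

def residual(z):
    x_hat = np.linalg.lstsq(H, z, rcond=None)[0]
    return z - H @ x_hat

v = rng.normal(size=12)
b_modeled = P_H @ v                          # fault in the modeled space
b_error = P_E @ v                            # fault in the error space

print(np.linalg.norm(residual(b_modeled)) < 1e-9)   # True: fully absorbed
print(np.isclose(np.linalg.norm(residual(b_error)),
                 np.linalg.norm(b_error)))          # True: fully visible
```

A general fault splits into both components, so growing its magnitude eventually trips the residual test while still biasing the state estimate at smaller magnitudes, as the text describes.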
In one exemplary embodiment, the fault vector direction is one of k possible unit vectors:

b = B u_b, u_b ∈ {u_b^1, u_b^2, …, u_b^k}

[0055] Such methodology is used in GPS RAIM, where the bias unit vectors are columns of the identity matrix (indicating a single-channel pseudo-range fault) and the worst-case horizontal and vertical biases are used for bias PLs. This approach might be useful for some faults in the head-tracker application, but it is expected that most faults would be at least 2D (pixels) or 3D (fiducials), requiring subspace methods. [0056] In one exemplary embodiment, the fault bias can be selected from a subspace of ℝ^(2N). [0057] To ensure the fault space is bounded and provides a coordinate transform to the fault parameters (β), an eigenvalue decomposition of D may be used to define the coordinate transform:

β^T D β = β^T V_D Λ_D V_D^T β = (Λ_D^(1/2) V_D^T β)^T (Λ_D^(1/2) V_D^T β) = α^T α

[0058] The worst-case fault that can occur within this ellipsoid may be defined by maximum horizontal and vertical position biases. These are norms of linear functions of the state vector bias, which also applies to the head-tracker for the various groupings and linear combinations of states. Defining y ∈ ℝ^(N_y). [0059] This procedure may be repeated for additional outputs of interest. The fault basis should be selected to cover all free parameters expected to vary during the fault. To handle different fault types (that are not expected to occur simultaneously), this method could be repeated for the different faults, and a worst-case or weighted PL may be taken.
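The coordinate transform of paragraph [0057] can be checked numerically; the symmetric positive definite matrix below is a hypothetical stand-in for D:

```python
import numpy as np

# With D symmetric positive definite, D = V_D Lambda_D V_D^T, and
# alpha = Lambda_D^(1/2) V_D^T beta, the quadratic form beta^T D beta
# reduces to alpha^T alpha, so the fault ellipsoid becomes a unit ball
# in the alpha coordinates.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
D = A @ A.T + 4.0 * np.eye(4)        # hypothetical SPD fault-shape matrix
lam, V = np.linalg.eigh(D)           # eigenvalue decomposition of D
beta = rng.normal(size=4)
alpha = np.sqrt(lam) * (V.T @ beta)  # alpha = Lambda^(1/2) V^T beta
print(np.isclose(beta @ D @ beta, alpha @ alpha))  # True
```

Working in the alpha coordinates is what lets the worst-case fault search over the ellipsoid be posed as a norm maximization.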
[0060] The detectability criterion is:

D = (1 / p_det²) W^T P_E W ∈ ℝ^(N_f × N_f)

[0061] Assuming four points per fiducial:

Number of fiducials   Number of equations   Dimension of error space
1                     8                     2
2                     16                    10
3                     24                    18
4                     32                    26
5                     40                    34

In addition to requiring a minimum number of equations, detectability requires that no linear combination of fault basis vectors lie within the modeled space. Multiple 2D pixel errors generally are detectable (though PLs will be larger), provided that at least three pixels, required to estimate a head pose, are assumed to be unfaulted. A 3D fiducial point fault will not be detectable for snapshot pose, since the point can move along the line of sight without generating residuals. However, note that this motion will not create any reprojection error or state bias. To characterize a fault in a 3D fiducial, the fault could be expressed in terms of two detectable dimensions. [0062] Intrinsic calibration errors may be undetectable for certain constellations, as focal length and camera center may be mistaken for camera translation. Extrinsic errors are also not detectable using snapshot optical pose, as they are indistinguishable from head pose. The corner cases of bias detectability can be further explored by considering the output and residual bias equations:

y_b = C A W β = C V_1 S_1^-1 U_1^T U_W1 S_W1 V_W1^T β

[0063] Two of the main design parameters in the integrity algorithm are FPR and MDR. When selecting FPR and MDR, several factors may be considered. A low FPR is desirable, since that will provide more available measurements, increasing pose estimate availability and accuracy. A low MDR is also desirable, since MDR is multiplied by the a priori fault rate to determine integrity rates. The tradeoff is that lowering FPR increases the fault detection threshold, causing larger biases to escape detection for a given MDR. This requires larger PLs to bound the increased bias errors.
If the larger PLs reach HMI alert limits (ALs), the pose estimation is no longer available for integrity applications. In some cases, this tradeoff may be adjusted by accounting for the exposure rate of critical phases of flight and/or scheduling requirements. If FPR-MDR-AL tuning cannot reach an acceptable design, camera or constellation changes may be required to reduce sensor noise, increase the number of measurements, reduce DOPs, or improve fault observability. [0064] In at least one embodiment, the kth pixel may have a fault bias; for example, if fiducial detection fails to extract the point correctly or a pixel is faulty. Adding a bias term for the kth pixel:

z_k − z_0^k = [1 0; 0 1] b_P^k

[0065] In at least one embodiment, the kth 3D fiducial point may have a fault bias; for example, if a fiducial has been moved. Adding a bias term for the kth fiducial point:

z_k − z_0^k = H_M^k b_M^k

[0066] For a single fiducial point fault hypothesis PL, H_1mp, this process may be computed over each fiducial point of interest and the worst one taken. For a multiple fiducial point fault hypothesis PL, H_2mp+, multiple W_P^[k] matrices may be added to W. Eventually the fault may become undetectable as more fiducial point faults are added. While the equations to compute H_1mp and H_1p PLs (and similarly H_2p+ and H_2mp+) are identical, they may require separate calculations if the FPR/MDR allocations or list of potential failure points are different. [0067] For QR or ArUco fiducials, multiple points may be used from a single fiducial (corners, finders, etc.). In this case, the single-point fiducial fault model may not be appropriate, since multiple points on the fiducial are likely to fail simultaneously. In at least one embodiment, assume that all N_mp fiducial points have failed, using the H_2mp+ fault hypothesis described herein.
W is:

W = [0; I_2; I_2; ⋮; I_2; 0] ∈ ℝ^(2N×2)

[0068] In another embodiment, only one point per fiducial is used in the measurement equation, reducing the number of available measurements, but also simplifying the measurement correlation and fault modeling. [0069] In another embodiment, coarse pose checks may be used to screen faulty measurements. If a fiducial has shifted significantly, the coarse pose should be off for that fiducial, effectively screening for gross errors. Passing coarse pose checks could be used to constrain the fault space. [0070] In another embodiment, solution separation/RANSAC may be used as a screening tool. Reprojection errors for the faulted fiducial, when a pose is computed without the faulted fiducial, should be large, and a PL may be constructed for faults that would escape detection by this check. [0071] In another embodiment, corrections are made assuming the fiducial has moved as a rigid body, reducing the fault dimension to six or fewer. [0072] Referring to FIG. 6, a flowchart of a method for determining a head pose according to an exemplary embodiment of the inventive concepts disclosed herein is shown. In at least one embodiment, optical features are extracted. Extraction starts by taking a captured image and applying a series of image processing operations to identify image regions that contain candidate fiducials. Each candidate fiducial is then decoded into a unique fiducial ID. The fiducial IDs are compared against a known fiducial constellation database, which contains the 3D location and ID of each fiducial expected in the platform. [0073] Any detected fiducial IDs that do not match a fiducial in the constellation database may be discarded as erroneous measurements. After discarding candidate fiducial IDs with no constellation match, the pose estimation function is provided an array of measured fiducial 2D pixel locations and the corresponding 3D location of each fiducial.
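The stacked fault-shape matrix W of paragraph [0067], with an I_2 block at each pixel measurement belonging to the faulted fiducial, can be built as follows (a sketch; the point indexing is ours):

```python
import numpy as np

# Build W in R^(2N x 2) for a common-mode 2D pixel bias shared by all
# points of one faulted fiducial: an I_2 block at each of that fiducial's
# pixel measurements, zeros elsewhere.
def fault_shape(n_points: int, faulted: list) -> np.ndarray:
    W = np.zeros((2 * n_points, 2))
    for k in faulted:
        W[2 * k : 2 * k + 2, :] = np.eye(2)
    return W

# Six measured points; points 1-3 belong to the faulted fiducial.
W = fault_shape(6, [1, 2, 3])
print(W.shape)          # (12, 2)
print(int(W.sum()))     # 6: three I_2 blocks
```

Multiplying W by a 2-vector pixel bias then reproduces the common-mode fault pattern assumed by the H_2mp+ hypothesis.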
[0074] The pose estimation function estimates a six-DOF head pose (3D position and orientation) by solving an optimization problem, minimizing the reprojection error defined as the difference between the measured 2D fiducial position and the expected fiducial position determined by projecting the 3D fiducial position onto the image plane for a given pose. The pose integrity function uses a first-order expansion of the reprojection error to determine the covariance of the pose estimate. This can be used to determine if the pose estimate is accurate enough to be used in an application. To mitigate potential faults, fault detection may be performed to determine if the residual error is higher than expected. To further protect against undetected single-point faults, an H1 PL may be computed based on the largest pose bias that would escape detection at a given missed-detection rate. The covariance, combined with the H1 PL, may be used to provide integrity bounds that can be used to assess if the optical pose estimate is adequate for a given high-integrity application. [0075] In at least one embodiment, an optical pose RAIM integrity monitoring process may be characterized by an estimation process 600, a monitoring process 602, and an error overbounding process 604. The estimation process 600 includes a fiducial detection function 610 that receives a captured image 606. The fiducial detection function 610 processes the captured image 606 to determine regions of the image that may contain coded fiducials. The fiducial detection function 610 may use techniques such as edge detection, corner detection, contour detection, etc. There may be a relatively high level of uncertainty associated with false detections (a candidate region is not an actual fiducial) and false identifications (a candidate region is actually a fiducial, but the 2D image region is incorrectly associated with the wrong fiducial in the 3D fiducial database). 
Either one of these cases can cause erroneous pose estimates. To mitigate these cases, a fiducial decoding/validating function 612 validates the image regions. [0076] In at least one embodiment, the fiducial decoding/validating function 612 establishes a high probability of correctly pairing 2D image points to 3D model points (correspondence integrity, as more fully described herein). The image region detected by the fiducial detection function 610 is analyzed to extract a binary code, $m_{est}$, also referred to as the fiducial ID. This fiducial ID estimated from the image is compared to the set of expected fiducial IDs present in the 3D model, also known as the constellation, from the 3D model database 608. If the estimated fiducial ID does not match any of the expected fiducials, the estimate may be marked invalid. A CIE will occur if the estimated fiducial ID is within the constellation but is not the correct fiducial ID. The probability of a CIE, $P_{CIE}$, can be designed to be sufficiently low by ensuring the fiducial has enough bits and the fiducial IDs in the constellation are well spaced. The identified 2D image points and 3D model points are transferred to a pose estimation optimization function 614. [0077] In at least one embodiment, the pose estimation optimization function 614 performs the estimation and projection functions outlined herein. Given a pose estimate, $\hat{x}$, the expected location of the corresponding 2D pixels, $\hat{p}$, may be determined from the camera properties using perspective projection:
$$\hat{p}(\hat{x}) = f(\hat{x})$$
The pose estimate may then be determined by adjusting the pose to minimize the reprojection error between the expected pixel location and the measured location, $\tilde{p}$:
$$\hat{x} = \arg\min_{x} \left\| \tilde{p} - \hat{p}(x) \right\|$$
[0078] In at least one embodiment, during a monitoring process 602, a perspective projection function 616 receives the pose estimate from the pose estimation optimization function 614, and constellation data from the 3D model database 608. 
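A minimal sketch of the reprojection-error minimization in [0077] follows. For brevity it assumes a pinhole camera with a known focal length and estimates only the 3-DOF translation via Gauss-Newton with a numerical Jacobian; the full six-DOF problem adds a rotation parameterization. All names, the focal length, and the point geometry are illustrative assumptions.

```python
import numpy as np

def project(points3d, t, f=500.0):
    """Pinhole projection p = f * [X/Z, Y/Z] of model points shifted by t."""
    p = points3d + t
    return f * p[:, :2] / p[:, 2:3]

def estimate_translation(points3d, measured2d, t0, iters=25, f=500.0):
    """Gauss-Newton minimization of the reprojection error over translation t."""
    t = np.array(t0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = (measured2d - project(points3d, t, f)).ravel()  # stacked residuals
        J = np.empty((r.size, 3))  # forward-difference Jacobian dr/dt
        for j in range(3):
            dt = np.zeros(3)
            dt[j] = eps
            rj = (measured2d - project(points3d, t + dt, f)).ravel()
            J[:, j] = (rj - r) / eps
        t += np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton step
    return t
```

With four or more non-degenerate fiducial points the translation is observable, and on noiseless measurements the iteration converges to the pose that zeros the reprojection error.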
Perspective projection is used to compute the expected 2D pixel locations based on the pose estimate and constellation data, and the expected 2D pixel locations are used to determine if the pose is accurate. The projection is then validated by a fault detection/exclusion function 618. [0079] In at least one embodiment, if the pose estimate is correct, the reprojection error is expected to be small. Because there is some modeled uncertainty in the measured 2D points and the resulting pose estimate, some small reprojection error is expected. If the reprojection error is larger than would be expected by the measurement error model, the measurements are not consistent with the model. This is unacceptable for a high-integrity solution, since an invalid model may cause pose errors to be larger than predicted by error propagation. Such error could be caused by a subset of faulty measurements, which may be identified and excluded from the solution for increased solution availability. The fault detection/exclusion function 618 is similar to the fault detection used in GPS-RAIM. For example, if a Gaussian measurement error model is used, the reprojection errors will follow a Chi-square distribution, and a Chi-square hypothesis test may be used to detect faults. If the pose passes validation tests, an output 620 of the pose, and potentially of the fault detection status of the pose, is delivered to a corresponding application. [0080] In at least one embodiment, contemporaneous with the monitoring process 602, an error overbounding process 604 includes an error propagation observability function 622 that receives the outputs from the fiducial decoding/validating function 612 and the pose estimate from the pose estimation optimization function 614. Using a measurement error model and a first-order expansion of the perspective projection function 616, the covariance of the pose estimate may be estimated. The covariance gives information pertaining to the pose estimate accuracy for all six degrees of freedom. 
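The Chi-square residual test of [0079] and the first-order covariance of [0080] can be sketched as follows. The sketch assumes a Gaussian pixel-noise model with known standard deviation; for simplicity it uses a redundancy of two degrees of freedom, for which the Chi-square quantile has a closed form (a general implementation would look up the quantile for df = 2N − 6). Names are illustrative assumptions.

```python
import math
import numpy as np

def chi_square_fault_test(residuals, sigma_px, alpha=1e-3):
    """Reject the fault-free hypothesis if the normalized residual sum of
    squares exceeds the Chi-square quantile at false-alarm rate alpha.
    For df = 2 the quantile is exactly -2*ln(alpha); in general it is
    taken from a Chi-square table for df = 2N - 6.
    Returns True if a fault is detected."""
    s = sum(r * r for r in residuals) / sigma_px**2
    threshold = -2.0 * math.log(alpha)
    return s > threshold

def pose_covariance(J, sigma_px):
    """First-order pose covariance from the reprojection Jacobian J (2N x 6):
    P = sigma^2 * (J^T J)^{-1}. Large diagonal entries flag pose degrees of
    freedom that are poorly observable from the current fiducial geometry."""
    return sigma_px**2 * np.linalg.inv(J.T @ J)
```

Small residuals pass the test; residuals far outside the modeled pixel noise trip the detector, mirroring the GPS-RAIM-style screening described above.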
Furthermore, it can be used to determine if there is insufficient data to estimate one or more of the pose degrees of freedom. In this case, it may be desirable to reformulate the pose estimation problem to eliminate unobservable degrees of freedom. [0081] In at least one embodiment, a protection level (PL) computation function 624 receives the pose covariance from the error propagation observability function 622 and the output 620 including the fault detection status. PLs are used as an overbound on the pose estimate error, with some small, defined probability of exceedance. The pose covariance may be used to compute a PL under fault-free conditions. In addition, pose errors due to faults may also be included. While the fault detection test may have passed, some faults may go undetected. To account for these faults, the worst-case fault that will not be detected by fault detection (at an allowable missed-detection rate) is computed, similarly to GPS-RAIM PL computation, with an extension to subspaces of 2D pixel measurements and 3D model points. [0082] In at least one embodiment, the computed PL is transferred to a pose estimate adequacy function 626. Many applications have requirements for the accuracy of the pose estimate. Given the PLs, the pose estimate adequacy function 626 determines if the PL is below the required accuracy and outputs 628 that determination. If the pose estimate is not good enough, it should not be used for the application. A single pose estimate could be consumed by multiple applications, so it may fail the assessment for one application but pass for a less stringent application. In such a case, the pose estimate may be used by the less stringent application. [0083] While some exemplary embodiments described herein are specifically directed toward head tracking, it may be appreciated that other embodiments for pose tracking and verification are envisioned. 
For example, a pose of an aircraft on a landing approach may be determined and verified via the methodologies described herein using one or more cameras affixed to the aircraft and runway markings that operate as fiducials. [0084] It is believed that the inventive concepts disclosed herein and many of their attendant advantages will be understood by the foregoing description of embodiments of the inventive concepts disclosed, and it will be apparent that various changes may be made in the form, construction, and arrangement of the components thereof without departing from the broad scope of the inventive concepts disclosed herein or without sacrificing all of their material advantages; and individual features from various embodiments may be combined to arrive at other embodiments. The form hereinbefore described being merely an explanatory embodiment thereof, it is the intention of the following claims to encompass and include such changes. Furthermore, any of the features disclosed in relation to any of the individual embodiments may be incorporated into any other embodiment.
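Tying the foregoing together, the protection-level computation of [0081] and the per-application adequacy assessment of [0082] can be sketched as follows. The fault-free multiplier k is an assumed GPS-style tail quantile (5.33 corresponds to roughly a 1e-7 two-sided exceedance probability for a Gaussian error), and the numeric values and names are illustrative assumptions only.

```python
def protection_level(sigma_axis, bias_h1=0.0, k=5.33):
    """Overbound on the pose error for one axis: a fault-free term k*sigma,
    inflated by the worst-case undetected single-fault bias (H1 term)."""
    return k * sigma_axis + bias_h1

def adequate_for(pl, required_accuracy):
    """An application may consume the pose only if the PL is within its
    required accuracy (alert limit)."""
    return pl <= required_accuracy

# One pose estimate consumed by two applications with different requirements:
pl = protection_level(sigma_axis=0.001, bias_h1=0.003)  # e.g., radians
print(adequate_for(pl, required_accuracy=0.01))   # → True (less stringent)
print(adequate_for(pl, required_accuracy=0.005))  # → False (more stringent)
```

As noted in [0082], the same estimate may thus pass the assessment for one application while failing it for another.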
Claims (15) [0001] A computer apparatus comprising: at least one camera (106,108,110); a data storage element (102,104) storing identification data corresponding to a set of fiducials (202,204,206), each comprising coded features distinct from each other; and at least one processor (100) in data communication with the at least one camera, the data storage element, and a memory storing processor executable code for configuring the at least one processor to: receive one or more fiducial images from the camera; detect a plurality of fiducials in the one or more fiducial images; identify the coded features of each of the fiducials; determine an identification of each of the plurality of fiducials by comparing the coded features to the set of fiducials; produce a head pose estimation based on a known relative location and orientation of each of the identified fiducials; produce a projection of a perspective image based on the head pose estimation; compare the perspective image to the one or more fiducial images; and determine a pose estimation fault status. [0002] The computer apparatus of Claim 1, wherein each of the plurality of fiducials comprises a quick response, QR, code. [0003] The computer apparatus of Claim 1, wherein each of the plurality of fiducials comprises an ArUco marker. [0004] The computer apparatus of any preceding Claim, wherein the processor executable code further configures the at least one processor to determine an orientation of each of the plurality of fiducials based on the coded features. [0005] The computer apparatus of any preceding Claim, wherein the processor executable code further configures the at least one processor to: determine a head pose covariance based on the head pose estimation, the plurality of fiducials, and the one or more fiducial images; and determine a protection level based on the head pose covariance and the pose estimation fault status. 
[0006] The computer apparatus of Claim 5, wherein the processor executable code further configures the at least one processor to assess a head pose adequacy for each of a plurality of applications based on corresponding desired levels of accuracy for each of the plurality of applications. [0007] A method for gauging the accuracy of a head pose estimate comprising: detecting a plurality of fiducials in one or more fiducial images, each of the plurality of fiducials comprising coded features distinct from each other; identifying coded features of each of the fiducials; determining an identification of each of the plurality of fiducials by comparing the coded features to a set of fiducials; producing a head pose estimation based on a known relative location and orientation of each of the identified fiducials; producing a projection of a perspective image based on the head pose estimation; comparing the perspective image to the one or more fiducial images; and determining a pose estimation fault status. [0008] The method of Claim 7, wherein each of the plurality of fiducials comprises a quick response, QR, code. [0009] The method of Claim 7, wherein each of the plurality of fiducials comprises an ArUco marker. [0010] The method of Claim 7, 8 or 9, further comprising determining an orientation of each of the plurality of fiducials based on the coded features. [0011] The method of any of Claims 7 to 10, further comprising: determining a head pose covariance based on the head pose estimation, the plurality of fiducials, and the one or more fiducial images; and determining a protection level based on the head pose covariance and the pose estimation fault status. [0012] The method of Claim 11, further comprising assessing a head pose adequacy for each of a plurality of applications based on corresponding desired levels of accuracy for each of the plurality of applications. [0013] An aircraft comprising an apparatus as claimed in any of Claims 1 to 6. 
[0014] The aircraft of Claim 13, wherein the plurality of fiducials is disposed within a cockpit (400) of the aircraft. [0015] The aircraft of Claim 13, wherein the plurality of fiducials is disposed on a helmet (200) of a pilot.